41 research outputs found

    Towards Teaching a Robot to Count Objects

    We present here an example of incremental learning between two computational models dealing with different modalities: a model that switches spatial visual attention and a model that learns the ordinal sequence of phonetic numbers. Merging them via a common reward signal nevertheless produces a cardinal counting behaviour that can be implemented on a robot

    A Computational Model of Spatial Memory Anticipation during Visual Search

    Some visual search tasks require memorizing the locations of stimuli that have been previously scanned. Considering eye movements raises the question of how we are able to maintain a coherent memory despite the frequent, drastic changes in perception. In this article, we present a computational model that is able to anticipate the consequences of eye movements on visual perception in order to update a spatial memory
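    The core idea of anticipatory updating can be sketched in a few lines. This is our own toy illustration, not the paper's model: a retinotopic memory map is shifted by the inverse of the planned saccade vector so that memorized locations remain aligned with the post-saccadic retinal frame. All names here are ours.

    ```python
    import numpy as np

    def remap_memory(memory: np.ndarray, saccade: tuple) -> np.ndarray:
        """Shift the 2D memory map by the inverse of the saccade vector
        (our simplification of anticipatory spatial remapping)."""
        dy, dx = saccade
        return np.roll(memory, shift=(-dy, -dx), axis=(0, 1))

    memory = np.zeros((5, 5))
    memory[2, 3] = 1.0                               # a memorized stimulus at (row=2, col=3)
    updated = remap_memory(memory, saccade=(1, 1))   # eye moves one step down-right
    print(np.argwhere(updated == 1.0))               # stimulus is now at (1, 2)
    ```

    A real model would of course operate on continuous, noisy activity maps rather than on an exact discrete shift.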

    A Computational Model of Basal Ganglia and its Role in Memory Retrieval in Rewarded Visual Memory Tasks

    Visual working memory (WM) tasks involve a network of cortical areas such as inferotemporal, medial temporal and prefrontal cortices. We suggest here to investigate the role of the basal ganglia (BG) in the learning of delayed rewarded tasks through the selective gating of thalamocortical loops. We designed a computational model of the visual loop linking the perirhinal cortex, the BG and the thalamus, biased by sustained representations in prefrontal cortex. This model concurrently learns different delayed rewarded tasks that require maintaining a visual cue and associating it with itself or with another visual object to obtain reward. The retrieval of visual information is achieved through thalamic stimulation of the perirhinal cortex. The input structure of the BG, the striatum, learns to represent visual information based on its association with reward, while the output structure, the substantia nigra pars reticulata, learns to link striatal representations to the disinhibition of the correct thalamocortical loop. In parallel, a dopaminergic cell learns to associate striatal representations with reward and modulates learning of connections within the BG. The model provides testable predictions about the behavior of several areas during such tasks, while providing a new functional organization of learning within the BG, putting emphasis on the learning of the striatonigral connections as well as the lateral connections within the substantia nigra pars reticulata. It suggests that the learning of visual WM tasks is achieved rapidly in the BG and used as a teacher for feedback connections from prefrontal cortex to posterior cortices
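    The dopamine-modulated learning principle at the heart of such models can be illustrated with a minimal sketch. This is not the paper's equations, only a generic reward-prediction-error (RPE) scheme under our own assumptions: the RPE trains the value weights of striatal units, and the same RPE gates the corticostriatal weight updates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_cortex, n_striatum = 8, 4
    W = rng.uniform(0.05, 0.15, (n_striatum, n_cortex))  # corticostriatal weights
    v = np.zeros(n_striatum)                             # striatal value weights
    lr = 0.1

    def step(x, reward):
        """One trial: compute striatal activity, the dopamine-like RPE,
        then update both the value weights and the gated cortical input."""
        global W, v
        s = np.maximum(W @ x, 0.0)       # rectified striatal activity
        rpe = reward - v @ s             # dopamine-like prediction error
        v += lr * rpe * s                # value learning
        W += lr * rpe * np.outer(s, x)   # RPE-modulated corticostriatal update
        return rpe

    x = np.zeros(n_cortex)
    x[2] = 1.0                           # a single rewarded cue
    errors = [step(x, reward=1.0) for _ in range(200)]
    print(round(errors[0], 3), round(errors[-1], 3))  # RPE shrinks toward 0
    ```

    As the prediction converges on the delivered reward, the error signal vanishes and learning stops, which is the standard account of phasic dopamine responses that the model builds on.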

    A Code Generation Approach to Neural Simulations on Parallel Hardware

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions
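    To make the numerical-methods point concrete, here is our own simplified illustration (not ANNarchy's generated code) of two common schemes such a simulator might emit for the rate-coded equation tau * dr/dt + r = I:

    ```python
    tau, dt, I = 10.0, 1.0, 1.0

    def explicit_euler(r):
        # forward Euler: r_{t+1} = r_t + dt/tau * (I - r_t)
        return r + dt / tau * (I - r)

    def implicit_euler(r):
        # backward Euler, solved analytically for r_{t+1}
        return (r + dt / tau * I) / (1.0 + dt / tau)

    r_e = r_i = 0.0
    for _ in range(100):
        r_e = explicit_euler(r_e)
        r_i = implicit_euler(r_i)
    print(round(r_e, 4), round(r_i, 4))  # both approach the fixed point I = 1.0
    ```

    The implicit scheme stays stable for larger time steps, which is one reason a code generator may offer several methods and pick per equation.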

    Emergence of Attention within a Neural Population

    We present a dynamic model of attention based on the Continuum Neural Field Theory that explains attention as an emergent property of a neural population. This model is experimentally shown to be very robust and able to track one static or moving target in the presence of very strong noise or of many distractors, even ones more salient than the target. This attentional property is not restricted to the visual case and can be considered as a generic attentional process for any spatio-temporal continuous input
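    The emergence of an attentional bump from such field dynamics can be sketched with a toy 1D neural field. This is our own minimal illustration, not the paper's exact model or parameters: local excitation and broader inhibition (a difference of Gaussians) make the field settle on a single bump at the stimulus despite noisy input.

    ```python
    import numpy as np

    n, tau, dt = 100, 10.0, 1.0
    x = np.arange(n)

    # circular distance matrix and difference-of-Gaussians lateral weights
    d = np.minimum(np.abs(x[:, None] - x[None, :]), n - np.abs(x[:, None] - x[None, :]))
    w = 1.5 * np.exp(-d**2 / (2 * 3.0**2)) - 0.75 * np.exp(-d**2 / (2 * 10.0**2))

    rng = np.random.default_rng(1)
    dc = np.minimum(np.abs(x - 30), n - np.abs(x - 30))
    target = np.exp(-dc**2 / (2 * 2.0**2))      # a stimulus centered at position 30

    u = np.zeros(n)
    for _ in range(300):
        inp = target + 0.2 * rng.normal(size=n)           # fresh noise every step
        f = np.clip(u, 0.0, 1.0)                          # firing rate
        u += dt / tau * (-u + (w @ f) * 10.0 / n + inp)   # field update
    print(int(np.argmax(u)))   # the bump settles near the target location
    ```

    The bump persists because lateral excitation sustains it while surround inhibition suppresses competing activity, which is the mechanism the model relies on for distractor rejection.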

    Implicit neural representations for deep drawing and joining experiments

    geometries in many industrial applications. Although simulations using Finite Element Methods (FEM) have helped in steering toward that goal, they are particularly time-consuming for large 3D meshes. Searching for the process parameters that lead to the desired shape of a metal part can become extremely expensive in terms of man-hours and computational resources. We investigated how machine learning models, especially deep neural networks, can speed up the design of deep drawing and joining processes by interpolating between FEM simulations, reducing evaluation time from minutes or hours to seconds. In this study, inspired by implicit representations of 3D objects using neural networks, an implicit approach is used to predict local properties such as the thickness of the metal sheet, its thinning, and plastic strain, using solely the process parameters defining the experiment. We observe that the low number of trainable parameters of the predicting model ensures generalization to unseen process parameters and ultimately allows for a reliable, fast inspection of the processes
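    The implicit-representation idea can be summarized schematically. This is our own illustration, not the authors' architecture: a small MLP maps process parameters plus a query coordinate to a local property such as sheet thickness, so a single network stands in for the whole simulated field. The parameter names are made up for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(params, coords, weights):
        """Evaluate a scalar property (e.g. thickness) at each query
        coordinate for one set of process parameters."""
        h = np.concatenate(
            [np.broadcast_to(params, (len(coords), len(params))), coords], axis=1
        )
        for W, b in weights[:-1]:
            h = np.tanh(h @ W + b)           # hidden layers
        W, b = weights[-1]
        return (h @ W + b).ravel()           # one scalar per query point

    sizes = [5, 32, 32, 1]                   # 3 process params + 2D coordinate -> thickness
    weights = [(rng.normal(0, 0.3, (a, b)), np.zeros(b))
               for a, b in zip(sizes, sizes[1:])]

    process = np.array([0.5, 0.2, 0.8])      # e.g. force, friction, depth (hypothetical)
    coords = rng.uniform(-1, 1, (200, 2))    # query points on the sheet
    thickness = mlp(process, coords, weights)
    print(thickness.shape)                   # (200,)
    ```

    Training such a network against FEM outputs (omitted here) is what turns it into the fast surrogate described in the abstract; because the coordinate is an input, the resolution of the prediction is not tied to any mesh.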

    A distributed computational model of spatial memory anticipation during a visual search task

    Some visual search tasks require memorizing the locations of stimuli that have been previously fixated. Considering eye movements raises the question of how we are able to maintain a coherent memory despite the frequent, drastic changes in perception. In this article, we present a computational model that is able to anticipate the consequences of eye movements on visual perception in order to update a spatial working memory

    Visual Category Learning by Means of the Basal Ganglia


    Reducing connectivity by using cortical modular bands

    The way information is represented and processed in a neural network may have important consequences for its computational power and complexity. Basically, information representation refers to distributed or localist encoding, and information processing refers to schemes of connectivity that can be complete or minimal. In the past, theoretical and biologically inspired approaches to neural computation have insisted on complementary views (distributed and complete versus localist and minimal, respectively) with complementary arguments (complexity versus expressiveness). In this paper, we report experiments on biologically inspired neural networks performing sensorimotor coordination which indicate that a localist and minimal view can perform well if some connectivity constraints (also coming from biological inspiration) are respected
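    A back-of-the-envelope comparison (our own, not the paper's figures) shows why restricting connections to modular bands matters: complete connectivity between two maps of n units grows quadratically, while band-restricted connectivity grows linearly.

    ```python
    def complete_connections(n: int) -> int:
        """Every unit of one map connects to every unit of the other."""
        return n * n

    def band_connections(n: int, bandwidth: int) -> int:
        """Each unit connects only to a band of `bandwidth` units
        (a hypothetical stand-in for a cortical modular band)."""
        return n * bandwidth

    n = 10_000
    print(complete_connections(n))   # 100_000_000 connections
    print(band_connections(n, 50))   # 500_000 connections, a 200x reduction
    ```

    The actual bandwidth and topology in the paper's networks will differ, but the quadratic-versus-linear scaling is what makes the minimal scheme attractive.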

    Sustainable computational science: the ReScience initiative

    Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; computational science, however, lags behind. In the best case, authors may provide their source code as a compressed archive and feel confident their research is reproducible. But this is not exactly true. Jonathan Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer review. Existing journals have been slow to adapt: source code is rarely requested and hardly ever actually executed to check that it produces the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of other traditional scientific journals. ReScience resides on GitHub, where each new implementation of a computational study is made available together with comments, explanations, and software tests